[WIP] Futures/coroutines for services #135
Force-pushed from cb1b4a3 to 50f0a27 with the following commits:
- Moved wait set code to its own class for code reuse
- Added timeout_sec_to_nsec()
- wait_for_service() implemented with timers
- Added unit tests for timeout_sec_to_nsec()
- Added test for WaitSet class
- Use negative timeouts to mean block forever
- Double quotes to single quotes
- Added wait_for_service() tests and fixed bugs it caught
- Eliminate blind exception warning
- Reduce flakiness of test by increasing time to 0.1s
- Comment says negative timeouts block forever
- Use :returns:
- Move add_subscriptions() arugments -> arguments
- Daemon as keyword arg
- Remove unnecessary namespace argument
- Use S_TO_NS in test
- More tests using S_TO_NS
- Use monotonic clock for testing timer time
- Increased test timeout by 30 seconds
- CheckExact -> IsValid
- Fixed wait_set not clearing ready_pointers
- Remove unnecessary namespace keyword arg
- Non-blocking wait
- Remove expression that always evaluates to True
- Raise ValueError on invalid capsule
- Simplified timeout_sec_to_nsec
- Added 'WaitSet.destroy()' and made executor use it
- GraphListener periodically checks if rclpy is shutdown
- Misc fixes after pycapsule names
- Wait set class always clears entities before waiting
- Remove underscore on import
- Reformat timeout line
- Use () when raising exceptions
- Removed _ on imports
- Executor optimizations: ~5% less overhead in wait_for_ready_callbacks()
- Fixed executor yielding entities to wrong node; also refactored some code to a sub-generator
- Use list() only where necessary
- Docstring in imperitive mood
- Executors reuse iterator
- Moved some wait_set code into C
- Avoid another list comprehension
- Replaced WaitSet with C code in executor
- Remove test code
- Use lists instead of set
- Use locally defined function instead of member function
- Shorten code using macro
- Move everything to new wait_set code
- Protect against ImportError.path being None (#134)
- Free memory when things don't go as planned (#138)
- Python library _rclpy could not be imported due to rclpy_sigint.dll not being on PATH; this uses the 'APPEND_LIBRARY_DIRS' argument to ament_add_nose_test to add that directory to the path
- All wait set work is turned into a task before being yielded
- Removed special handling on guard condition that's no longer needed
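Several commits above mention `timeout_sec_to_nsec()` and the convention that negative timeouts mean "block forever". A minimal sketch of that convention, assuming `S_TO_NS` is nanoseconds per second; it illustrates the behavior described by the commit messages rather than the exact code on this branch:

```python
S_TO_NS = 1000 * 1000 * 1000  # nanoseconds per second


def timeout_sec_to_nsec(timeout_sec):
    """Convert a timeout in seconds to integer nanoseconds for the wait set."""
    if timeout_sec is None or timeout_sec < 0:
        # Negative (or absent) timeouts mean block forever
        return -1
    return int(timeout_sec * S_TO_NS)


assert timeout_sec_to_nsec(0.1) == 100 * 1000 * 1000
assert timeout_sec_to_nsec(-1) == -1  # block forever
```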
Force-pushed from 50f0a27 to 0f0e20a.
YuanYuYuan pushed a commit to YuanYuYuan/rclpy that referenced this pull request on Nov 12, 2025: "Add an example for build.rs usage"
Note: Ignore this PR. A lot of it has already been split into separate pull requests. I'll close this when there is nothing left to split.
This is part of service feature parity. It is not ready for review, but feedback is welcome if you can tolerate the chaotic writing below. Once #127 is merged this will need to be rebased onto master (omitting d980051).
Futures are a way to support multiple simultaneous client requests. Coroutines are a way to support futures inside of callbacks in an executor. Futures need a task running in parallel to update the result. A task can get to the executor via:

- `await future`: `Future.__await__` can invoke the task, and yield if the result is not ready yet (this requires the callback to be a coroutine, i.e. `async def`). See the first sketch at the end of this description.
- `rclcpp::Executor::spin_until_future_complete(future)`: an equivalent `spin_until_future_complete` could add that task to the executor.

Current state:

- Supports `async def`/`await` only. Does not support generator/`yield from` coroutines (but it could).
- `wait_for_ready_callbacks` yields new callbacks first, then old coroutines that yielded the last time they were run.
  - The executor must iterate `wait_for_ready_callbacks` until `StopIteration`, or old coroutines may never get resumed.
  - Maybe `wait_for_ready_callbacks` should only `yield` new tasks, and another method should yield old ones. Custom executors should be responsible for implementing their own old-vs-new task strategy.
- All work is wrapped in a `Task` instance (see the second sketch at the end of this description).
  - The `Task` class hides the difference between a normal function and a coroutine from an executor.
  - `Task` done callbacks get executed in series after the original callback/coroutine finishes.
  - `Task` is greedy and executes as many done callbacks as it can, pausing only if one is a coroutine that yields.
  - `asyncio.Task` done callbacks get scheduled in the event loop, but this would be a problem in rclpy because it could mean another `rcl_wait` before executing a done callback.
- `asyncio.Future` is `await`-able but not thread safe and expects an event loop reference (though it partially works without it).
- `concurrent.futures.Future` is thread safe but not `await`-able (maybe it could be subclassed for this).
- `asyncio.Task` is a subclass of future, so maybe a `Task` could be its own future too?

connects to #123
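First sketch: the `await future` path described above. This is a hedged illustration, not the API delivered by this PR: it assumes a `call_async()`-style client method that returns an awaitable future and an executor that can resume coroutine callbacks.

```python
import rclpy
from example_interfaces.srv import AddTwoInts


def main():
    rclpy.init()
    node = rclpy.create_node('await_example')
    client = node.create_client(AddTwoInts, 'add_two_ints')

    async def call_add_two_ints():
        # call_async() is assumed to return a Future immediately instead of blocking
        future = client.call_async(AddTwoInts.Request(a=1, b=2))
        # Awaiting the future yields back to the executor until the result
        # arrives, so other callbacks keep running in the meantime
        response = await future
        print('sum: %d' % response.sum)

    # The coroutine is registered like any other callback; the executor is
    # assumed to wrap it in a task and resume it after it yields
    node.create_timer(1.0, call_add_two_ints)
    rclpy.spin(node)


if __name__ == '__main__':
    main()
```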
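Second sketch: a simplified version of the `Task` idea. It hides whether the wrapped handler is a plain function or a coroutine, and runs its done callbacks greedily, in series, as soon as the handler finishes. This is an illustration of the design described above, not the actual implementation (done callbacks here are plain callables for brevity).

```python
import inspect


class Task:
    """Minimal task wrapper: one call runs the handler until it finishes or yields."""

    def __init__(self, handler):
        self._handler = handler      # plain callable or 'async def' function
        self._coroutine = None       # created the first time a coroutine handler runs
        self._done = False
        self._done_callbacks = []

    def add_done_callback(self, callback):
        self._done_callbacks.append(callback)

    def done(self):
        return self._done

    def __call__(self):
        """Run or resume the task; an executor calls this each time it is ready."""
        if self._done:
            return
        if self._coroutine is None:
            result = self._handler()
            if not inspect.iscoroutine(result):
                # Plain function: it already ran to completion
                self._finish()
                return
            # 'async def' handler: keep the coroutine so it can be resumed later
            self._coroutine = result
        try:
            # Resume the coroutine; it pauses here if it awaits something not ready
            self._coroutine.send(None)
        except StopIteration:
            self._finish()

    def _finish(self):
        self._done = True
        # Done callbacks run greedily, in series, right after the handler
        # finishes -- not scheduled for a later rcl_wait like asyncio would
        for callback in self._done_callbacks:
            callback()
```

An executor could then treat every ready entity uniformly: wrap its callback in a `Task`, call it once, and call it again on a later iteration if `done()` is still false.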